Regulatory feedback networks are a class of neural networks, related to virtual lateral inhibition (so named to distinguish it from anatomical lateral inhibition), that perform inference using negative feedback.〔J. Reggia, "Virtual lateral inhibition in parallel activation models of associative memory," in Proc. 9th International Joint Conference on Artificial Intelligence, Aug. 1985, pp. 244-248.〕〔McFadden, F. E. (1995). "Convergence of Competitive Activation Models Based on Virtual Lateral Inhibition." Neural Networks 8(6): 865-875.〕〔Achler, T. (2002). "Input Shunt Networks." Neurocomputing 44: 249-255.〕 The feedback operates during recognition, and connectivity parameters are not changed during recognition; the mechanism is therefore entirely separate from learning and training (e.g. supervised learning or unsupervised learning). It is also distinct from models of spatial attention. Instead, these networks determine the relevance of inputs through a "conservation of information" principle.

== How the network functions ==
The computational basis of conservation of information is that an input should not pass more information to the next layer than is justified. Inputs are therefore regulated by the outputs they activate: each input's contribution (its salience) is adjusted through feedback from its associated outputs. The adjusted input amplitudes are propagated to the output layer, and a new salience is then re-evaluated from the resulting output activity (through feedback).
This process can be iterated until the network reaches a steady state. At every step, the role of salience is to maintain the relation that the total activity of the outputs connected to an input equals that input's amplitude.〔Achler, T. (2002). "Input Shunt Networks." Neurocomputing 44: 249-255.〕〔Achler, T., Amir, E., "Input Feedback Networks: Classification and Inference Based on Network Structure," Artificial General Intelligence, 2008.〕
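The iteration described above can be sketched numerically. The following is a minimal illustration, assuming a binary connectivity matrix and a multiplicative feedback update; the function name, the uniform initialization, and the exact normalization are illustrative assumptions rather than the published formulation.

```python
import numpy as np

def regulatory_feedback(x, W, steps=200, eps=1e-9):
    """Sketch of a regulatory feedback iteration (assumed form).

    x : input amplitudes, shape (n_inputs,)
    W : binary connectivity, shape (n_outputs, n_inputs);
        W[j, i] = 1 if output j uses input i.
    """
    n_outputs, n_inputs = W.shape
    y = np.ones(n_outputs)      # start with uniform output activity
    n = W.sum(axis=1)           # number of inputs feeding each output
    for _ in range(steps):
        # feedback: total activity of the outputs connected to each input
        f = W.T @ y
        # salience: each input's amplitude divided by that feedback
        s = x / (f + eps)
        # each output rescales itself by the mean salience of its inputs
        y = y * (W @ s) / n
    return y

# Two outputs: the first uses only input 1, the second uses inputs 1 and 2.
W = np.array([[1.0, 0.0],
              [1.0, 1.0]])
x = np.array([1.0, 1.0])
y = regulatory_feedback(x, W)
```

In this example the second output, which accounts for both inputs, converges toward 1 while the first is suppressed toward 0, and at steady state the total output activity fed back to each input (`W.T @ y`) approximately matches the input amplitude `x`, which is the conservation relation stated above.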